\input jmclet[let,jmc]
\jmclet
\address
Editor,
Scientific American
415 Madison Avenue
New York, New York  10017
\body

Sir:

	John Searle's Chinese room, in which a man interprets
rules for conversing in Chinese, is analogous to the common
situation in which a computer obeys a program that is
interpreting a program written in a higher-level programming
language.  Searle's mistake is to confuse the capabilities of
the programs at the different levels and to confuse their
capabilities with those of the computer itself.  The process the
man is interpreting may know Chinese even though the man doing
the interpreting does not.  Whether that process would have to
know Chinese in any reasonable sense depends on the level of
conversation required.

	Until the present article, I had always considered
Searle's axioms too vague for detailed comment, but this time
he has illustrated his axiom 3:

	{\it ``Syntax by itself is neither constitutive of
nor sufficient for semantics.''}

\noindent He tells us the man in
the Chinese room could interpret the Chinese conversation
as a report of a chess game or as a stock market prediction.
This is an empirically testable statement, and both experience
and theory tell us that it's false.

	I challenge anyone to take a full page of conversation
from a Chinese novel and turn it into a stock market prediction
by changing the interpretations of Chinese characters.
Here's the difficulty: after concocting new
interpretations for the Chinese characters in one sentence, he
will encounter the same characters again in different relations
in other sentences and won't be able to find a sensible
interpretation that fits more than a few of the occurrences.
That's exactly what
happens when one starts to solve a cryptogram with some bad
guesses.

	There has evolved a tight relation between the syntax and
the semantics of any natural language.  This relation is needed
so that a person can understand a text as he hears it or reads it
and so that children can learn their native languages.  It
results in cryptograms having essentially unique solutions.  It's
what makes it possible to decipher ancient inscriptions.
Relevant scientific theories include the statistical theory of
communication originated by Shannon in the 1940s and the theory
of algorithmic complexity originated by Solomonoff, Chaitin and
Kolmogorov, both of which have been treated in {\it Scientific
American}.  Equally close relations will characterize languages
used internally by reasoning computers.
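
	A rough quantitative illustration comes from Shannon's
notion of unicity distance, the number of ciphertext characters
beyond which a cryptogram has essentially one solution:
$$U = {H(K)\over D},$$
where $H(K)$ is the entropy of the key and $D$ is the redundancy
of the language in bits per character.  For a simple substitution
cipher on English, $H(K) = \log_2 26! \approx 88$ bits, and the
redundancy of English is usually estimated at about $3.2$ bits
per letter, so a cryptogram of about thirty letters already has
essentially a unique solution.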

	Children being taught their native languages have
situational cues in addition to the linguistic ones.  Even these,
however, are ambiguous: if the child's mother points at the cat
and says ``kitty'', she could very well mean the finger.  When
she then points at the dog and says ``doggie'', the finger
interpretation becomes less tenable.  If syntax and semantics
were really unconnected, the whole drama could be given a
different interpretation.

	Artificial intelligence is some distance from programs
that can carry out intelligent extended general conversations.
Such programs will require extensive databases of facts about
the world and about language.  When that day comes, I suppose
philosophers will still argue about whether the programs
really know the true sentences they utter or really understand
the languages in which they communicate.  The computer scientists,
however, will {\it probably} have had to analyze what capabilities
are involved in understanding a language.

	I wrote {\it ``probably''} because of the regrettable
possibility that some giant learning machine will become
intelligent without its builders understanding any more about
intelligence than they do today.  However, I think that those of
us trying to achieve intelligence by understanding intelligence
are ahead of them.

\closing
Sincerely,
John McCarthy
\endletter
\end